<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Archiving and Interchange DTD v2.3 20070202//EN" "archivearticle.dtd">
<?covid-19-tdm?>
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="methods-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Bohr. Scit.</journal-id>
<journal-title>BOHR International Journal of Smart Computing and Information Technology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Bohr. Scit.</abbrev-journal-title>
<issn pub-type="epub">2583-2026</issn>
<publisher>
<publisher-name>BOHR</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.54646/bijscit.2021.14</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Methods</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Automatic translator from Portuguese (voice and text) to Portuguese sign language</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Brito</surname> <given-names>Maeva de</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Domingues</surname> <given-names>Nuno Soares</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Polytechnic Institute of Lisbon, Lisbon School of Health Technology</institution>, <addr-line>Lisbon</addr-line>, <country>Portugal</country></aff>
<aff id="aff2"><sup>2</sup><institution>Polytechnic Institute of Lisbon, Lisbon Institute of Engineering</institution>, <addr-line>Lisbon</addr-line>, <country>Portugal</country></aff>
<author-notes>
<corresp id="c001">&#x002A;Correspondence: Nuno Soares Domingues, <email>nndomingues@gmail.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>26</day>
<month>04</month>
<year>2021</year>
</pub-date>
<volume>2</volume>
<issue>1</issue>
<fpage>21</fpage>
<lpage>27</lpage>
<history>
<date date-type="received">
<day>21</day>
<month>03</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>12</day>
<month>04</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 de Brito and Domingues.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>de Brito and Domingues</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>One of the major focuses of technology, engineering, and computer science is to solve problems and improve the quality of life in health. The relationship between health and technology has advanced greatly in recent decades, and many further collaborations are foreseen. Even so, some citizens still face many daily obstacles in coping with society as it is designed. One of these obstacles is deafness and hearing impairment. In Portugal, there are about 100,000 to 150,000 people with some level of hearing loss, and of these, around 30,000 use Portuguese sign language as their mother tongue [(<xref ref-type="bibr" rid="B1">1</xref>); National Institute for Rehabilitation, (<xref ref-type="bibr" rid="B2">2</xref>)]. The greatest difficulties they encounter are poor communication with hearing people and the need for a human interpreter. Communication became even more complicated with the mandatory use of face masks, limits on the number of people per room, and social distancing due to the COVID-19 pandemic. To diminish or even solve this issue, the authors developed an automatic system that translates voice and text into Portuguese sign language with captions. The sign language output will be very useful for deaf and hearing-impaired citizens, and the captions will be useful for those who do not understand sign language. Programming tools were developed to build a translator meeting these requirements, and this article describes the developments and achievements. For the proof of concept, the translator started with 16 images in the database and reached confidence levels between 70 and 90%. These results encourage the further improvements the authors are continuing to make to the developed tool.</p>
</abstract>
<kwd-group>
<kwd>hearing impaired</kwd>
<kwd>communication</kwd>
<kwd>COVID-19</kwd>
<kwd>Portuguese sign language</kwd>
<kwd>automatic translator</kwd>
</kwd-group>
<counts>
<fig-count count="10"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="9"/>
<page-count count="7"/>
<word-count count="3658"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>Introduction</title>
<p>Disability is an evolving concept that results from the interaction between people with impairments and attitudinal, orientational, or spatial barriers that prevent a person&#x2019;s full and effective participation in society on an equal basis with others (National Institute for Rehabilitation (<xref ref-type="bibr" rid="B3">3</xref>)). One of these disabilities is deafness and hearing impairment. In Portugal, there are about 100,000 to 150,000 people with some level of hearing loss, and of these, around 30,000 use Portuguese sign language as their mother tongue [(<xref ref-type="bibr" rid="B1">1</xref>); National Institute for Rehabilitation, (<xref ref-type="bibr" rid="B2">2</xref>)].</p>
<p>To overcome the existing gaps, it is often necessary to have empathy and to think about what can be done to improve services for disabled citizens. For an author who is hearing impaired and has had negative experiences, intensified during the COVID-19 pandemic by the mandatory use of face masks, it made sense to create an automatic translator from voice to text and from text to Portuguese sign language (LGP), which will certainly be useful for people with the same limitations.</p>
<p>The difference between hearing impaired and deaf people lies in the depth of hearing loss and how well they can communicate (<xref ref-type="bibr" rid="B4">4</xref>). People with hearing impairment can hear certain sounds, but with difficulty; to communicate, they use speech in conjunction with medical devices, such as hearing aids or cochlear implants, depending on the degree of hearing loss (<xref ref-type="bibr" rid="B1">1</xref>). People with profound hearing loss, who cannot hear even when using hearing aids or cochlear implants, use sign language to communicate (<xref ref-type="bibr" rid="B1">1</xref>).</p>
<p>According to several authors, the greatest difficulties encountered by the deaf and hard of hearing are poor communication with the listeners, since there is a great ignorance on the part of the hearing community about the deaf community&#x2019;s own characteristics, namely, LGP, and the lack of access to properly trained interpreters (<xref ref-type="bibr" rid="B5">5</xref>).</p>
<p>In addition to the problem of poor communication between hearing-impaired people and listeners, in 2019 the COVID-19 pandemic brought even more complications with regard to communication (<xref ref-type="bibr" rid="B6">6</xref>). The mandatory use of face masks to prevent the transmission of the SARS-CoV-2 virus had a negative impact in this context, since many deaf and hearing-impaired people, even those using sign language, need to see facial expressions and use lip reading (<xref ref-type="bibr" rid="B6">6</xref>).</p>
<p>Informatics, programming, and machine learning have a key role in improving the communication of hearing-impaired and deaf people in society. The work presented here is an example: an automatic translator producing written text, intended for hearing-impaired people who do not know Portuguese sign language (LGP), and LGP, intended for people who can only communicate through sign language.</p>
<p>Portuguese sign language is the sign language used by the Portuguese deaf community, which is characterized as a form of communication through hand movements and facial and body expressions that have their own vocabulary and grammar (<xref ref-type="bibr" rid="B1">1</xref>).</p>
<p>Portuguese sign language has a very specific sentence structure, quite different from that of the Portuguese language (LP), as it does not use linguistic connectors and the verb is always used in the infinitive (Gon&#x00E7;alves et al. (<xref ref-type="bibr" rid="B7">7</xref>)). Thus, LGP follows a subject-object-verb (SOV) grammatical syntax, whereas LP follows a subject-verb-object (SVO) sentence structure.</p>
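<p>As a toy illustration of this structural difference (an illustrative sketch, not part of the system described in this article, which does not perform SVO-to-SOV conversion), the reordering of an already-identified subject, object, and verb can be expressed in Python as follows; identifying those constituents in a real LP sentence would require syntactic parsing.</p>

```python
def svo_to_sov(subject: str, obj: str, verb: str) -> str:
    """Reorder an already-parsed sentence into the SOV order used by LGP.

    Toy sketch only: it assumes the subject, object, and verb (in the
    infinitive) have already been identified by some parsing step.
    """
    return f"{subject} {obj} {verb}"


# "O enfermeiro chegou ao hospital" (SVO) -> "enfermeiro hospital chegar" (SOV)
print(svo_to_sov("enfermeiro", "hospital", "chegar"))
```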
<p>In Portugal, according to the Portuguese Association of the Deaf, there are about 100,000 to 150,000 people with some level of hearing loss, and of these, about 30,000 people use LGP as their mother tongue [(<xref ref-type="bibr" rid="B1">1</xref>); National Institute for Rehabilitation, (<xref ref-type="bibr" rid="B2">2</xref>)].</p>
<p>In the last decade, there has been a special interest in the development of machine translators due to the evolution of technology, as well as a greater focus on promoting the social inclusion of the deaf community, making communication between deaf and hearing people more effective.</p>
<p>In 2016, a bidirectional translator of Portuguese sign language (called VirtualSign) was developed, a model that facilitates the access of deaf and hearing-impaired people to digital content, especially educational and learning content (<xref ref-type="bibr" rid="B8">8</xref>).</p>
<p>At an international level, it is worth mentioning the automatic translator HandTalk, which uses an avatar named &#x201C;Hugo&#x201D;. It is considered the largest automatic sign language translation platform in the world, and it was elected in 2013 by the United Nations (UN) as the best social app in the world (<xref ref-type="bibr" rid="B9">9</xref>). However, a questionnaire answered by the deaf community in Portugal (Appendix A and Appendix B) showed that automatic translators using avatars are in fact not the best method of communication, since facial and body expression, a crucial element in the phonology of sign languages, is compromised.</p>
<p>With this work, therefore, we intended to create an automatic translator that does not use an avatar, so as not to compromise the facial and corporal expression inherent to LGP. A database was created containing several images of LGP gestures. When the system encounters the word &#x201C;nurse&#x201D;, for instance, the image of the &#x201C;nurse&#x201D; gesture appears together with the caption. To achieve this, different programming languages were used to build code that receives the word &#x201C;nurse&#x201D; as input by voice recognition and writes it in the text field; the code then searches for this word in the library and, if it finds it, shows on the screen the image of the gesture corresponding to the word &#x201C;nurse&#x201D;, with the respective caption above the image and its confidence level.</p>
</sec>
<sec id="S2" sec-type="materials|methods">
<title>Materials and methods</title>
<p>To develop the voice-text and text-LGP machine translator, the authors got access to a set of tools that enabled its proper functioning. <xref ref-type="fig" rid="F1">Figure 1</xref> presents the flowchart of the process, where it is possible to verify all the steps taken in order to develop the translator.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Flowchart of the developed automatic translator.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g001.tif"/>
</fig>
<p>Each block of the flowchart is briefly explained below; the process receives as input a word or a set of words (a sentence) entered by the user.</p>
</sec>
<sec id="S3">
<title>Text input</title>
<p>This stage uses a microphone to receive the input from the user and uses the Google Recognition tool to convert audio to text. <xref ref-type="fig" rid="F2">Figure 2</xref> shows the interface of the translation system.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Interface of the developed translation system.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g002.tif"/>
</fig>
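<p>The article identifies the recognizer only as the Google Recognition tool. As a minimal sketch, assuming the third-party Python SpeechRecognition package is used to reach Google&#x2019;s recognizer (the function names, the handle_recognition helper, and the &#x201C;pt-PT&#x201D; language code are illustrative assumptions, not taken from the actual implementation), this stage could look as follows:</p>

```python
def handle_recognition(transcript):
    """Map a recognizer result (transcript string or None) to the screen text."""
    if transcript is None:
        return "Frase nao encontrada"  # "Sentence not found"
    return transcript


def listen_and_transcribe(language="pt-PT"):
    """Capture audio from the microphone and transcribe it, or return None."""
    # Imported lazily so the pure helper above runs without the package.
    import speech_recognition as sr  # third-party SpeechRecognition package (assumed)

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio, language=language)
    except sr.UnknownValueError:
        return None  # speech was unintelligible
```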
</sec>
<sec id="S4">
<title>Sentence recognition</title>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>If the spoken sentence is not recognized: &#x201C;Sentence not found&#x201D; will be displayed on the screen.</p>
</list-item>
<list-item>
<label>2.</label>
<p>If it is recognized: The recognized sentence and its confidence level are displayed on the screen, and these two variables are stored in the computer&#x2019;s RAM. In <xref ref-type="fig" rid="F3">Figure 3</xref>, one can see the sentence &#x201C;O enfermeiro chegou ao hospital&#x201D;, spoken in the SOV structure as &#x201C;Enfermeiro hospital chegar&#x201D;.</p>
</list-item>
</list>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>Presentation of the images corresponding to the words spoken in the microphone.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g003.tif"/>
</fig>
</sec>
<sec id="S5">
<title>Split function</title>
<p>This function splits a string into an ordered list of substrings and returns them as an array. In this case, the function splits the variable sentence into several variables (words), as exemplified next with the sentence &#x201C;enfermeiro hospital chegar&#x201D; (&#x201C;nurse hospital arrive&#x201D;). If only one word is spoken into the microphone, the function simply returns the word itself.</p>
<p>Input (text): enfermeiro hospital chegar</p>
<p>Output (function SPLIT): &#x201C;enfermeiro,&#x201D; &#x201C;hospital,&#x201D; &#x201C;chegar&#x201D;</p>
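<p>In Python, for example, this behavior corresponds to the built-in str.split method:</p>

```python
sentence = "enfermeiro hospital chegar"
words = sentence.split()  # splits on whitespace into an ordered list
print(words)  # ['enfermeiro', 'hospital', 'chegar']

# A single spoken word simply comes back as a one-element list:
print("enfermeiro".split())  # ['enfermeiro']
```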
</sec>
<sec id="S6">
<title>Database (BD)</title>
<p>After the division of the sentence into words by the SPLIT function, each stored word is compared with the words in the library list (the database library maps the words to their respective representative images, if they exist in the BD). The database contains 21 images; however, only 16 are recognized, because the words &#x201C;Ainda n&#x00E3;o,&#x201D; &#x201C;N&#x00E3;o h&#x00E1;,&#x201D; &#x201C;N&#x00F3;s,&#x201D; &#x201C;Dif&#x00ED;cil,&#x201D; and &#x201C;&#x00D3;S&#x201D; are recognized by the microphone but cannot be matched to their respective images.</p>
</sec>
<sec id="S7">
<title>Existence of the word as a .jpg file name</title>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>If the word does not exist in the BD: The screen shows the &#x201C;image not found&#x201D; symbol, as shown in <xref ref-type="fig" rid="F4">Figure 4</xref>.
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>Symbol for the image not found.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g004.tif"/>
</fig></p>
</list-item>
<list-item>
<label>2.</label>
<p>If the word exists in the BD: The corresponding image, which takes the value of the word found, is displayed, as in the example of the word &#x201C;enfermeira&#x201D; shown in <xref ref-type="fig" rid="F5">Figure 5</xref>.
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>Presentation of the &#x201C;enfermeira&#x201D; image corresponding to the text recognized by the microphone.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g005.tif"/>
</fig></p>
</list-item>
</list>
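<p>The two branches above amount to a membership test against the set of image base names in the database, followed by a choice between the gesture image and the &#x201C;not found&#x201D; symbol. A minimal sketch in Python (the function name, the library argument, and the placeholder file name are illustrative assumptions, not taken from the actual implementation):</p>

```python
NOT_FOUND_IMAGE = "image_not_found.jpg"  # placeholder symbol (hypothetical file name)


def image_filename(word, library):
    """Return '<word>.jpg' if the word exists in the database library,
    otherwise the 'image not found' placeholder."""
    key = word.lower()
    return f"{key}.jpg" if key in library else NOT_FOUND_IMAGE


library = {"enfermeira", "enfermeiro", "hospital", "chegar"}
print(image_filename("Enfermeira", library))  # enfermeira.jpg
print(image_filename("medico", library))      # image_not_found.jpg
```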
</sec>
<sec id="S8">
<title>Results and discussion</title>
<p>To evaluate the machine translation system developed, we applied a questionnaire proposing a set of tasks: to speak five sentences into the translator&#x2019;s microphone and then to answer seven questions evaluating the translator and its performance.</p>
<p>The selected sentences are in the SOV structure, since the translator cannot convert a sentence from the SVO structure to the SOV structure. Thus, the translator recognizes the voice, converts it to text, and displays on the screen the image corresponding to that text. The evaluation is based on the translator&#x2019;s goal: voice-to-text-to-image conversion.</p>
<p>The sentences selected and the output of each sentence are presented as follows:</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>LGP: FILHA TUA ENFERMEIRA (LP: A tua filha &#x00E9; enfermeira).</p>
</list-item>
<list-item>
<label>2.</label>
<p>LGP: EU CHOCOLATE N&#x00C3;O GOSTAR (LP: Eu n&#x00E3;o gosto de chocolate).</p>
</list-item>
<list-item>
<label>3.</label>
<p>LGP: FILHO MEU DOENTE (LP: O meu filho est&#x00E1; doente).</p>
</list-item>
<list-item>
<label>4.</label>
<p>LGP: ELES LISBOA CHEGAR (LP: Eles chegaram a Lisboa).</p>
</list-item>
<list-item>
<label>5.</label>
<p>LGP: ENFERMEIRO HOSPITAL CHEGAR (LP: O enfermeiro chegou ao hospital).</p>
</list-item>
</list>
<p>The questionnaire was answered by three people from the Associa&#x00E7;&#x00E3;o de Surdos do Concelho de Sintra, two of whom were born deaf and use cochlear implants. The third respondent was born hearing but became profoundly deaf after a measles infection; he does not use any hearing aid because he did not adapt to them. All respondents use LGP to communicate in their daily lives (<xref ref-type="fig" rid="F6">Figures 6</xref> to <xref ref-type="fig" rid="F10">10</xref>).</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p>Representation in LGP of the sentence &#x201C;FILHA TUA ENFERMEIRA.&#x201D;</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g006.tif"/>
</fig>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption><p>Representation in LGP of the sentence &#x201C;EU CHOCOLATE N&#x00C3;O GOSTAR.&#x201D;</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g007.tif"/>
</fig>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption><p>Representation in LGP of the sentence &#x201C;FILHO MEU DOENTE.&#x201D;</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g008.tif"/>
</fig>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption><p>Representation in LGP of the sentence &#x201C;ELES LISBOA CHEGAR.&#x201D;</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g009.tif"/>
</fig>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption><p>Representation in LGP of the sentence &#x201C;ENFERMEIRO HOSPITAL CHEGAR.&#x201D;</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="bijscit-2021-14-g010.tif"/>
</fig>
<p>The results obtained are presented in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Results obtained in the questionnaires made by 3 people from the Associa&#x00E7;&#x00E3;o de Surdos do Concelho de Sintra.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Questions</td>
<td valign="top" align="left">Person 1: 21 years old<break/> deafness level: moderate</td>
<td valign="top" align="left">Person 2: 19 years old<break/> deafness level: profound</td>
<td valign="top" align="left">Person 3: 46 years old<break/> deafness level: profound</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1. How do you rate the translation system in general? (Considering 1-Bad and 5-Very Good)</td>
<td valign="top" align="left">5</td>
<td valign="top" align="left">5</td>
<td valign="top" align="left">5</td>
</tr>
<tr>
<td valign="top" align="left">2. Did you understand the sentences proposed for the assessment?</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
</tr>
<tr>
<td valign="top" align="left">3. What is your opinion about the size of the images presented?</td>
<td valign="top" align="left">Large<xref ref-type="table-fn" rid="t1fn1"><sup>1</sup></xref></td>
<td valign="top" align="left">Suitable</td>
<td valign="top" align="left">Large<xref ref-type="table-fn" rid="t1fn1"><sup>1</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">4. Were there any sentence(s) that you had more difficulty understanding?</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">No</td>
</tr>
<tr>
<td valign="top" align="left">5. Do you find an automatic image translation system useful?</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
</tr>
<tr>
<td valign="top" align="left">6. Do you think the system is useful for Portuguese Sign Language?</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
</tr>
<tr>
<td valign="top" align="left">7. Make any comments you consider important, in particular what other systems you know and suggestions for improving this system.</td>
<td valign="top" align="left">&#x2018;The images are great, but the lines should be bolder, to understand gestalt more&#x2019;.</td>
<td valign="top" align="left">&#x2018;It is important to learn Sign Language, but could put 3D animation or video&#x2019;.</td>
<td valign="top" align="left">Did not answer</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="t1fn1"><p><sup>1</sup>In a brief discussion after the questionnaire, the respondents explained that by answering &#x201C;large&#x201D; they meant that they would prefer to see the gesture at a larger size.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>Given the answers obtained, the respondents assessed the automatic translator positively and considered it an asset to the deaf community (<xref ref-type="table" rid="T1">Table 1</xref>).</p>
<p>Constructive criticism concerned the images, which could have a more prominent outline to better highlight the gesture, and could be slightly larger for better visualization when the device presenting them is at a greater distance. It was also suggested to implement the inverse of the proposed translation (from LGP to text), so that listeners can understand what is being gestured.</p>
<p>Respondents, despite finding the proposed translation system &#x201C;very good,&#x201D; mentioned that they prefer LGP interpreters or video translation systems, knowing that both have some limitations. According to them, interpreters are their first choice; however, it is necessary to pay for the service every time, and the State does not help financially in this respect, which makes it economically very difficult. They also mentioned that, as described in the literature, SNS 24 has communication channels for deaf people through video call and webchat, but according to the respondents, these are constantly busy, and they end up unable to reach any interpreter.</p>
<p>Respondents shared the same opinion about avatar-based machine translators. According to them, these translators are often very confusing and cannot show emotion; in the case of the VirtualSign avatar, they say they cannot understand what is being gestured, as the hand arrangement is very confusing in relation to the space. The HandTalk (Libras) translator, however, is one that respondents agree is fairly good and perceptible.</p>
<p>Thus, it can be concluded that the translation system developed was well evaluated by the respondents, both at the level of translation and at the level of understanding of the figures.</p>
</sec>
<sec id="S9">
<title>Limitations of the study</title>
<p>The major limitation of this study was undoubtedly the scarce (practically non-existent) information available on LGP and its structure. More studies and coherent information on this theme are clearly necessary.</p>
<p>The translation system developed also presents some limitations, namely:</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>Of the 23 images in the database library, only 16 work correctly. The words &#x201C;Ainda n&#x00E3;o&#x201D; and &#x201C;N&#x00E3;o h&#x00E1;&#x201D; are each represented in sign language by a single gesture; however, the translator reads them as two distinct words and looks for two images (&#x201C;Ainda&#x201D; and &#x201C;n&#x00E3;o&#x201D;, for example). The words &#x201C;N&#x00F3;s&#x201D;, &#x201C;Dif&#x00ED;cil&#x201D; and &#x201C;&#x00D3;S&#x201D; are recognized by the microphone but are not matched with their respective images, as they are accented words.</p>
</list-item>
<list-item>
<label>2.</label>
<p>This translator cannot switch the text recognized by the microphone from the Portuguese sentence structure (SVO) to the sign language structure (SOV).</p>
</list-item>
<list-item>
<label>3.</label>
<p>It is not possible to write directly into the text box to translate.</p>
</list-item>
</list>
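<p>One possible remedy for the accented-word limitation (a sketch of a fix, not part of the system described here) is to fold the recognized text to unaccented lower case before the database lookup, using only the Python standard library:</p>

```python
import unicodedata


def strip_accents(text):
    """Fold accented words to plain lower-case ASCII, e.g. 'Nós' -> 'nos',
    so they can match unaccented image file names."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch)).lower()


print(strip_accents("Nós"))      # nos
print(strip_accents("Difícil"))  # dificil
```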
</sec>
<sec id="S10">
<title>Conclusion and future prospects</title>
<p>Undoubtedly, the last decade has been marked by strong growth in technology, which has had a positive impact on all sectors, particularly the health sector. However, there are still many obstacles to overcome in adapting society to people with disabilities, namely, hearing impairment.</p>
<p>Few works have been developed at the national level that tried to create a system to help this community, but two should be highlighted: &#x201C;PE2LGP: From Text to Sign Language (and vice-versa)&#x201D; by the student Ruben Santos from T&#x00E9;cnico de Lisboa, and the &#x201C;Real-Time Bidirectional Translator of Portuguese Sign Language&#x201D; led by Professor Paulo Escudeiro from the Instituto Superior de Engenharia do Porto. Both projects have limitations regarding the facial and body expressions inherent to LGP and at the linguistic level, because although LGP has been an official language in Portugal since 1997, there is very little information about it.</p>
<p>Thus, this study aimed to create an automatic voice and text translator for LGP that could help hearing-impaired people without resorting to avatars, so as not to compromise the facial and corporal expressions inherent to LGP. First, in order to understand what deaf people and LGP users think about translators, a questionnaire was administered to this community; 183 answers were obtained, of which 39 were from people with some degree of deafness. From this questionnaire, it was concluded that most respondents agree that a translator is a good means of inclusion for the deaf community, and that the image-based translator is the most perceptible to them.</p>
<p>In this way, an automatic translator was developed using several programming languages, and a database library was created to store the 23 images with LGP gestures. When the translator interface is opened, a microphone button appears; clicking it lets the user say the word or set of words to translate. The translator receives a given word as input by voice recognition and writes it in the text field; the code then looks for this word in the library and, if it finds it, shows on the screen the image of the corresponding gesture, with the respective caption above the image and its confidence level.</p>
<p>Despite the great progress that the health and technology sectors have brought to society in recent decades, there are still many gaps when it comes to adapting society to people with disabilities of any kind. Disabled people need adapted services that they can access without feeling frustrated or set aside, or even thinking that the problem lies within themselves as individuals, when in fact it lies in the lack of means and funds of institutions that fail to adapt their services.</p>
<p>The translation system was evaluated through a questionnaire answered by three deaf people from the Associa&#x00E7;&#x00E3;o de Surdos do Concelho de Sintra, who use LGP as a means of communication in their daily lives. The first part of the questionnaire consisted of five sentences that the respondents had to say into the microphone; they then answered seven questions on the translator&#x2019;s functioning and intelligibility. The answers were quite similar: they classified the translator as &#x201C;very good&#x201D; and as a useful translation system for the deaf community.</p>
<p>In future studies, it would be essential to overcome the limitations presented above. It would also be interesting to use this translator as a means of learning LGP, offering didactic games for children and dictionaries with words and the respective gestures. At a more advanced stage, it would be interesting to add a translation module from LGP to text, making the translator bidirectional.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1"><label>1.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gaspar</surname> <given-names>LR.</given-names></name></person-group> <source><italic>IF2LGP&#x2013;Int&#x00E9;rprete autom&#x00E1;tico de fala em l&#x00ED;ngua portuguesa para l&#x00ED;ngua gestual portuguesa.</italic></source> (2015). Available online at: <ext-link ext-link-type="uri" xlink:href="http://hdl.handle.net/10400.8/2541">http://hdl.handle.net/10400.8/2541</ext-link> (Accessed March 13, 2022).</citation></ref>
<ref id="B2"><label>2.</label><citation citation-type="journal"><collab>National Institute for Rehabilitation.</collab><source><italic>Inqu&#x00E9;rito nacional &#x00E0;s incapacidades, defici&#x00EA;ncias e desvantagens: s&#x00ED;ntese.</italic></source> (1996). Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.inr.pt/cadernos">https://www.inr.pt/cadernos</ext-link> (Accessed March 28, 2022).</citation></ref>
<ref id="B3"><label>3.</label><citation citation-type="journal"><collab>National Institute for Rehabilitation.</collab><source><italic>Gloss&#x00E1;rio - INR, I.P.</italic> Instituto Nacional Para a Reabilita&#x00E7;&#x00E3;o</source>. (2021). Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.inr.pt/glossario">https://www.inr.pt/glossario</ext-link> (Accessed March 3, 2022).</citation></ref>
<ref id="B4"><label>4.</label><citation citation-type="journal"><collab>World Health Organization [WHO].</collab> <source><italic>World report on hearing.</italic></source> (2021). Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.who.int/publications/i/item/world-report-on-hearing">https://www.who.int/publications/i/item/world-report-on-hearing</ext-link> (Accessed December 27, 2022).</citation></ref>
<ref id="B5"><label>5.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Araujo De Oliveira</surname> <given-names>YC</given-names></name> <name><surname>Deysny De Matos Celino</surname> <given-names>S</given-names></name> <name><surname>Cavalcanti Costa</surname> <given-names>GM</given-names></name></person-group>. <article-title>Comunica&#x00E7;&#x00E3;o como ferramenta essencial para assist&#x00EA;ncia &#x00E0; sa&#x00FA;de dos surdos.</article-title> <source><italic>Rev Saude Colet.</italic></source> (<year>2015</year>) <volume>25</volume>:<fpage>307</fpage>&#x2013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1590/S0103-73312015000100017</pub-id></citation></ref>
<ref id="B6"><label>6.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>ten Hulzen</surname> <given-names>RD</given-names></name> <name><surname>Fabry</surname> <given-names>DA</given-names></name></person-group>. <article-title>Impact of hearing loss and universal face masking in the COVID-19 era.</article-title> <source><italic>Mayo Clin Proc.</italic></source> (<year>2020</year>) <volume>95</volume>:<fpage>2069</fpage>&#x2013;<lpage>72</lpage>. <pub-id pub-id-type="doi">10.1016/j.mayocp.2020.07.027</pub-id></citation></ref>
<ref id="B7"><label>7.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goncalves</surname></name><etal/></person-group> (<year>2021</year>)</citation></ref>
<ref id="B8"><label>8.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Escudeiro</surname> <given-names>P</given-names></name> <name><surname>Escudeiro</surname> <given-names>N</given-names></name> <name><surname>Reis</surname> <given-names>R</given-names></name> <name><surname>Lopes</surname> <given-names>J</given-names></name> <name><surname>Norberto</surname> <given-names>M</given-names></name> <name><surname>Baltasar</surname> <given-names>AB</given-names></name><etal/></person-group> <article-title>Virtual sign - a real time bidirectional translator of Portuguese sign language.</article-title> <source><italic>Procedia Comput Sci.</italic></source> (<year>2015</year>) <volume>67</volume>:<fpage>252</fpage>&#x2013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1016/j.procs.2015.09.269</pub-id></citation></ref>
<ref id="B9"><label>9.</label><citation citation-type="journal"><collab>Hand Talk.</collab><source><italic>Discover the largest Sign Language translation platform in the world.</italic></source> (2022). Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.handtalk.me/br/sobre/">https://www.handtalk.me/br/sobre/</ext-link> (Accessed May 2, 2022).</citation></ref>
</ref-list>
</back>
</article>
