Browsing by Author "Salas, Rodrigo"
Showing 1 - 12 of 12
Item: A self-identification Neuro-Fuzzy inference framework for modeling rainfall-runoff in a Chilean watershed (Elsevier, 2021). Authors: Morales, Yerel; Querales, Marvin; Rosas, Harvey; Allende-Cid, Hector; Salas, Rodrigo.
Modeling the relationship between rainfall and runoff is an important problem in hydrology, but it is a complicated task because both the high complexity in which the two processes are embedded and the associated uncertainty affect the forecast. Neuro-fuzzy models have emerged as a useful approach, given the ability of neural networks to optimize the parameters of a fuzzy system. In this work, a Self-Identification Neuro-Fuzzy Inference Model (SINFIM) for modeling the relationship between rainfall and runoff in a Chilean watershed is proposed, reducing the uncertainty of selecting both the rainfall and runoff lags and the number of membership functions required in a fuzzy system. The data consist of average daily runoff and average daily rainfall for the Diguillín River, located in the Ñuble Region, recorded from 2000 to 2018 by the Chilean Directorate of Water Resources (DGA). In addition, we worked with the Colorado River basin, located in the Maule Region, to validate the proposed method. The experimental results showed a good fit using the last 3 years as the validation set; a further improvement was achieved when only the last year was used for validation, obtaining a Kling-Gupta Efficiency of 84%, higher than that of other forecasting models such as the Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Networks (ANN), and the Long Short-Term Memory (LSTM) approach. In addition, the Nash-Sutcliffe efficiency and percent BIAS indicate that the method is promising. Even better results were obtained in the validation basin, with a fit of 94% and an efficiency of 97%.
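For context, a fuzzy system of the kind tuned in this work represents each linguistic term with a parametric membership function. A minimal, generic NumPy sketch (not the authors' SINFIM code; the rule, values, and consequent coefficients below are purely hypothetical) of a Gaussian membership function and a Sugeno-style rule firing strength:

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Degree of membership of x in a Gaussian fuzzy set."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Illustrative rule: IF rainfall is "high" AND lagged runoff is "low"
# THEN predicted runoff = linear function of the inputs (Sugeno-style).
rainfall, runoff_lag = 12.0, 3.0          # hypothetical daily values
w = gaussian_mf(rainfall, 15.0, 5.0) * gaussian_mf(runoff_lag, 2.0, 2.0)
prediction = w * (0.4 * rainfall + 0.6 * runoff_lag)  # toy consequent
```

In a neuro-fuzzy model, the centers, widths, and consequent coefficients are the parameters the network learns; SINFIM additionally self-identifies how many such membership functions and lags are needed.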
Therefore, the proposed model is a solid alternative for forecasting runoff in a given watershed: it obtains good performance measurements, manages to predict both low and peak runoff values from rainfall events, and avoids the need to determine a priori the time-series lags and the number of fuzzy rules.
Item: A Spatio-Temporal Visualization Approach of PM10 Concentration Data in Metropolitan Lima (MDPI, 2021). Authors: Encalada-Malca, Alexandra Abigail; Cochachi-Bustamante, Javier David; Canas Rodrigues, Paulo; Salas, Rodrigo; López-Gonzales, Javier Linkolk.
Lima is considered one of the cities with the highest air pollution in Latin America. Institutions such as DIGESA, PROTRANSPORTE, and SENAMHI permanently monitor air quality; the air quality visualization system must therefore manage large amounts of data on different concentrations. In this study, a spatio-temporal visualization approach was developed to explore PM10 concentration data in Metropolitan Lima, where the spatial behavior of hourly PM10 concentrations is analyzed at different time scales using basic and specialized charts. The results show that the stations located on the east side of the metropolitan area had the highest concentrations, in contrast to the stations located in the center and north, which reported better air quality. In terms of temporal variation, the station with the highest biannual and annual PM10 averages was the HCH station. The highest PM10 concentrations were registered in 2018, during the summer, particularly in March, with daily averages that reached 435 μg/m³. During the study period, CRB was the station that recorded the lowest concentrations and the only one that met the Environmental Quality Standard for air quality.
The proposed approach lays out a sequence of steps for elaborating charts over increasingly specific time periods according to their relevance, together with statistical analyses, such as the dynamic temporal correlation, that yield a detailed visualization of the spatio-temporal variations of PM10 concentrations. Furthermore, it was concluded that the meteorological variables do not indicate a causal relationship with PM10 levels; rather, the concentrations of particulate matter are related to the urban characteristics of each district.
Item: Air quality assessment and pollution forecasting using artificial neural networks in Metropolitan Lima-Peru (Springer, 2021). Authors: Hoyos Cordova, Chardin; Lopez Portocarrero, Manuel Niño; Salas, Rodrigo; Torres, Romina; Canas Rodrigues, Paulo; López-Gonzales, Javier Linkolk.
The prediction of air pollution is of great importance in highly populated areas because it directly impacts both the management of the city's economic activity and the health of its inhabitants. This work evaluates and predicts the spatio-temporal behavior of air quality in Metropolitan Lima, Peru, using artificial neural networks. The conventional feedforward backpropagation network known as the Multilayer Perceptron (MLP) and the recurrent artificial neural network known as Long Short-Term Memory (LSTM) were implemented for the hourly prediction of PM10, based on past values of this pollutant and three meteorological variables obtained from five monitoring stations. The models were validated using two schemes: Hold-Out and Blocked-Nested Cross-Validation (BNCV). The simulation results show that periods of moderate PM10 concentration are predicted with high precision, whereas for periods of high contamination the performance of both the MLP and the LSTM diminished. On the other hand, prediction performance improved slightly when the models were trained and validated with the BNCV scheme.
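Blocked cross-validation for time series, as referenced above, keeps training data strictly earlier than validation data inside each block, so no future observation leaks into training. A simplified sketch of a blocked (not the full nested) split, for illustration only and not the authors' exact protocol:

```python
import numpy as np

def blocked_splits(n_samples, n_blocks, val_fraction=0.2):
    """Yield (train_idx, val_idx) pairs: each block is split
    chronologically, so validation always follows training."""
    blocks = np.array_split(np.arange(n_samples), n_blocks)
    for block in blocks:
        cut = int(len(block) * (1 - val_fraction))
        yield block[:cut], block[cut:]

# Example: 100 hourly observations, 4 blocks.
splits = list(blocked_splits(100, 4))
for train_idx, val_idx in splits:
    assert train_idx.max() < val_idx.min()  # no temporal leakage
```

The nested variant would additionally tune hyperparameters on an inner chronological split of each training segment.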
The simulation results showed that the models performed well for the CDM, CRB, and SMP monitoring stations, characterized by moderate to low levels of contamination. However, the results show the difficulty of predicting this pollutant at stations that present critical contamination episodes, such as ATE and HCH. In conclusion, LSTM recurrent artificial neural networks with BNCV adapt more precisely to critical pollution episodes and have better predictive performance for this type of environmental data.
Item: Biological knowledge-slanted random forest approach for the classification of calcified aortic valve stenosis (BMC, 2021). Authors: Cantor, Erika; Salas, Rodrigo; Rosas, Harvey; Guauque-Olarte, Sandra.
Background: Calcific aortic valve stenosis (CAVS) is a fatal disease, and there is no pharmacological treatment to prevent its progression. This study aims to identify genes potentially implicated in CAVS in patients with congenital bicuspid aortic valve (BAV) and tricuspid aortic valve (TAV), in comparison with patients having normal valves, using a knowledge-slanted random forest (RF). Results: This study implemented a knowledge-slanted RF that uses information extracted from a protein-protein interaction network to rank genes and modify their selection probability when drawing the candidate split variables. A total of 15,191 genes were assessed in 19 valves with CAVS (BAV, n = 10; TAV, n = 9) and 8 normal valves. The performance of the model was evaluated using accuracy, sensitivity, and specificity to discriminate cases with CAVS, and a comparison with a conventional RF was also performed. The proposed approach reported improved accuracy over the conventional RF when classifying cases with BAV and TAV separately (slanted RF: 59.3% versus 40.7%).
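The core mechanism of a knowledge-slanted forest is biasing which features are offered at each split. A schematic, pure-NumPy illustration (the scores below are hypothetical, not the study's network-derived ranking, and this is not the authors' implementation) of drawing candidate split variables with probability proportional to a prior relevance score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical prior relevance scores for 10 "genes" (e.g., derived
# from a protein-protein interaction network); higher = more likely
# to be offered as a candidate split variable.
prior = np.array([5.0, 1.0, 1.0, 4.0, 1.0, 1.0, 1.0, 3.0, 1.0, 1.0])
probs = prior / prior.sum()

def draw_candidates(n_candidates=3):
    """Sample candidate split variables without replacement,
    slanted toward the biologically ranked genes."""
    return rng.choice(len(probs), size=n_candidates,
                      replace=False, p=probs)

counts = np.zeros(len(probs))
for _ in range(2000):
    counts[draw_candidates()] += 1
# Gene 0 (highest prior) is offered far more often than gene 1.
```

A conventional RF corresponds to the special case where all prior scores are equal.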
When patients with BAV and TAV were grouped together against patients with normal valves, the addition of prior biological information was not relevant, with an accuracy of 92.6%. Conclusion: The knowledge-slanted RF approach reflected prior biological knowledge, leading to better precision in distinguishing between cases with BAV, TAV, and normal valves. The results of this study suggest that the integration of biological knowledge can be useful in difficult classification tasks.
Item: Image Quality Assessment to Emulate Experts' Perception in Lumbar MRI Using Machine Learning (MDPI, 2021). Authors: Chabert, Steren; Castro, Juan Sebastian; Muñoz, Leonardo; Cox, Pablo; Riveros, Rodrigo; Vielma, Juan; Huerta, Gamaliel; Querales, Marvin; Saavedra, Carolina; Veloz, Alejandro; Salas, Rodrigo.
Medical image quality is crucial for obtaining reliable diagnostics. Most quality controls rely on routine tests using phantoms, which neither closely reflect the reality of images obtained from patients nor directly reflect the quality perceived by radiologists. The purpose of this work is to develop a method that classifies the image quality perceived by radiologists in MR images. The focus was set on lumbar images, as they are widely used and present a variety of challenges. Three neuroradiologists evaluated the image quality of a dataset that included T1-weighted images in axial and sagittal orientation and sagittal T2-weighted images. In parallel, we introduced a computational assessment using a wide range of features extracted from the images, which were then fed into a classifier system. A total of 95 exams were used, from our local hospital and a public database, and part of the images were manipulated to broaden the quality distribution of the dataset. On average, a good recall of 82% and an area under the curve (AUC) of 77% were obtained under testing conditions using a Support Vector Machine.
Even though the current implementation still relies on user interaction to extract features, the results are promising with respect to a potential implementation for monitoring image quality online with the acquisition process.
Item: Machine Learning techniques for Behavioral Feature Selection in Network Intrusion Detection Systems (IEEE, 2021). Authors: Martinez, Vicente; Salas, Rodrigo; Tessini, Oliver; Torres, Romina.
Information systems are prone to receiving multiple types of attacks over the network. Therefore, Network Intrusion Detection Systems (NIDSs) analyze the behavior of network traffic to detect anomalies and eventual cyberattacks. A NIDS must be able to detect these cyberattacks efficiently and effectively based on a set of features, where performance is expected to depend on both the selected features and the machine learning technique used. The main goal of this work is to identify the most relevant characteristics required to distinguish, with high sensitivity and precision, between normal traffic and a network intrusion, together with the most relevant features for identifying a specific type of attack. In this work, a comparative study of different decision-tree-based machine learning techniques combined with several feature selection techniques was carried out to accomplish this goal. Random Forest and XGBoost achieved a performance of up to 98.5% in the F-measure when the complete set of features was used. Results show that performance was only slightly reduced, to 98%, when the 10 most relevant features were used. Moreover, the results also show that the model using only the 10 most relevant features was able to identify the type of attack with a performance of at least 90% in the F-measure.
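Ranking features by importance and retraining on the top 10, as described above, can be sketched with scikit-learn. This uses synthetic data as a stand-in for network-traffic features; it is not the study's dataset or exact pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for labeled network-traffic records.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank features by impurity-based importance and keep the top 10.
ranking = np.argsort(forest.feature_importances_)[::-1]
top10 = ranking[:10]

# Retrain using only the most relevant features.
reduced = RandomForestClassifier(n_estimators=100, random_state=0)
reduced.fit(X[:, top10], y)
score = reduced.score(X[:, top10], y)
```

In practice the retrained model would be evaluated on held-out traffic, and the same ranking can be recomputed per attack class to find class-specific features.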
We conclude that it is possible to obtain and rank a subset of the most relevant features that characterize the intrusion pattern in network traffic, in order to support the decision of how many features to include at runtime in a real network environment.
Item: Propuesta de construcción de un modelo de optimización para la producción hospitalaria: una primera aproximación (Universidad de Valparaíso, 2014). Authors: Canessa Aillapá, Gianinna; Cortés Tello, Erika; Arriola Vera, Alexis; Salas, Rodrigo. Undergraduate thesis submitted in partial fulfillment of the requirements for the degree of Biomedical Civil Engineer.
This thesis presents the studies carried out to formulate the steps for developing a mathematical optimization model. It details the research conducted to provide a first approximation of a model of this process, presents the validation of a production-process diagram, and reports final results that offer a first contribution to modeling in the hospital setting, based largely on the Guía de Pre-Inversión Hospitalaria (Hospital Pre-Investment Guide). The results delivered in this work are, first, a bibliographic survey of different authors to identify existing optimization models and the steps they describe for developing one. Second, a production-process diagram is developed and validated using a research methodology built on the analysis of different criteria and on interviews with health professionals from two public hospitals in Chile.
Finally, an analysis is presented of the products delivered by a hospital organization, classified by family and based on the Guía de Pre-Inversión Hospitalaria, all as part of formulating the steps for developing an optimization model, alongside an analysis of the different constraints that such an optimization model must satisfy.
Item: Propuesta de Flujo de Procesamiento utilizando Python para ajustar la Señal Electromiográfica Funcional a la Contracción Voluntaria Máxima (Colegio de Kinesiólogos de Chile, 2021). Authors: Valencia, Oscar; De La Fuente, Carlos; Guzmán-Venegas, Rodrigo; Salas, Rodrigo; Weinstein, Alejandro.
The clinical and theoretical use of electromyography (EMG), based on the behavior of the action potentials recorded in the musculoskeletal system during functional tasks, has generated several areas of knowledge. From a research perspective, there are many processing pipelines for biomedical signals, and for EMG in particular. For example, normalizing an EMG signal to the maximum voluntary contraction (MVC) is commonly used to report the level of muscle activity; however, the code used is rarely shared. Moreover, the use of some programming languages represents a barrier to learning because of license costs and the software skills required. Consequently, the Python language, freely available and with a simple syntax, appears as a strong alternative, offering an opportunity in the training of professionals at both the undergraduate and graduate levels.
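A pipeline of the kind proposed in this work typically rectifies the signal, extracts a linear envelope, and scales it by the peak of the MVC trial. A minimal NumPy sketch of that general technique (illustrative only, with toy sinusoidal signals; not the authors' published code):

```python
import numpy as np

def mvc_normalize(emg, mvc_emg, window=50):
    """Rectify, smooth (moving-average envelope), and express the
    functional EMG as a percentage of the MVC envelope peak."""
    kernel = np.ones(window) / window
    envelope = np.convolve(np.abs(emg), kernel, mode="same")
    mvc_envelope = np.convolve(np.abs(mvc_emg), kernel, mode="same")
    return 100.0 * envelope / mvc_envelope.max()

# Toy signals: the functional task is weaker than the MVC trial.
t = np.linspace(0, 1, 1000)
mvc_trial = np.sin(2 * np.pi * 50 * t)          # "maximal" effort
task_trial = 0.4 * np.sin(2 * np.pi * 50 * t)   # submaximal task
activity = mvc_normalize(task_trial, mvc_trial)  # percent of MVC
```

Real pipelines usually add band-pass filtering before rectification and a low-pass (e.g., Butterworth) envelope instead of a moving average; those steps are omitted here for brevity.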
In line with the above, the objective of this study was to propose a processing pipeline using Python to normalize the functional EMG signal to the maximum voluntary contraction.
Item: Taxonomies using the clique percolation method for building a threats observatory (IEEE, 2021). Authors: Torres, Romina; González, Nicolás; Cabrera, Mathías; Salas, Rodrigo.
Cyberattacks are increasing every day, demanding that security incident response teams proactively determine potential threats early. Although social networks such as Twitter are a rich and up-to-date source of information where users tweet about different topics, it is complex to efficiently and effectively obtain results that support decision-making on a specific subject such as cyberattacks. Therefore, in this work we propose an offline mining process based on the clique percolation method, applied over a corpus of tweets, to generate an indexed knowledge base about cyberattacks. The results are promising for observing threats as they evolve. To present the results properly, we built an observatory prototype that allows cybersecurity researchers to explore threats over time and space.
Item: Un método de Deep learning para el pronóstico espacio temporal de eventos sísmicos (Universidad de Valparaíso, 2021-12-29). Authors: Canales Olivares, Angello; Salas, Rodrigo; co-advisor: Velandia Muñoz, Daira.
Seismic events are among the most devastating natural catastrophes for human civilization. Their consequences are so severe that they can cause major structural damage, leave large numbers of victims, and even trigger other kinds of tragedies such as tsunamis, not to mention the many deaths they can produce. The exact moment at which such a catastrophe may occur is completely uncertain; in Charles Richter's own words, "Only fools, charlatans, and liars predict earthquakes" (Johnson 2020).
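The clique percolation method used in the taxonomies entry above finds overlapping communities as unions of adjacent k-cliques (cliques sharing k-1 nodes). A small NetworkX illustration on a toy term-co-occurrence graph (hypothetical terms, not the tweet corpus):

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Toy co-occurrence graph: nodes could be terms extracted from tweets,
# with an edge when two terms co-occur often enough.
G = nx.Graph()
G.add_edges_from([
    ("phishing", "email"), ("phishing", "credential"),
    ("email", "credential"),                  # 3-clique A
    ("ransomware", "encryption"), ("ransomware", "payment"),
    ("encryption", "payment"),                # 3-clique B
    ("credential", "ransomware"),             # weak bridge, no triangle
])

# Communities = unions of adjacent 3-cliques (sharing 2 nodes).
communities = [set(c) for c in k_clique_communities(G, 3)]
```

The bridge edge does not belong to any triangle, so the two topic clusters remain separate communities, which is the behavior that makes the method useful for building a taxonomy of threat terms.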
However, science today has very advanced instruments and mathematical techniques for monitoring seismic zones; even so, it is not possible to predict with certainty the exact magnitude and moment of occurrence. Chile is considered, worldwide, one of the territories with a high risk of devastating seismic events: it is, to date, the only country in the world to have suffered a magnitude 9.5 cataclysm, at 15:11 on Sunday, May 22, 1960, in the area of Valdivia. According to a testimony in the news piece "Desde los relatos de sobrevivientes: 60 años del terremoto y maremoto de Valdivia" (May 22, 2020, Periódico Resumen), the earth "swallowed" some people as if in a Hollywood movie. The magnitude of this event was such that it has been described as equivalent to 89 times the entire existing nuclear arsenal. Chile lies on the South American tectonic plate, at its western boundary, where the Nazca and Antarctic plates converge and generate a subduction zone with the South American plate. This project seeks to estimate the rate of seismic occurrence in the subduction zone of these two plates using a deep neural network (DNN) architecture, based on the spatio-temporal ETAS (Epidemic-Type Aftershock Sequence) model. This model corresponds to a conditional intensity function of point processes, which determines the spatio-temporal rate of seismic occurrence. It considers two types of seismicity, clustered (triggered) seismicity and background seismicity, where background events trigger the clustered events. The proposed ETAS model is estimated with a semiparametric technique, with parametric and nonparametric components corresponding to the triggered and background seismicity, respectively.
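For reference, the spatio-temporal ETAS conditional intensity described above is commonly written (notation varies by author; this is the standard textbook form, not necessarily the thesis's exact parameterization) as:

```latex
\lambda(t, x, y \mid H_t) = \mu(x, y)
  + \sum_{i \,:\, t_i < t} \kappa(m_i)\, g(t - t_i)\, f(x - x_i,\, y - y_i \mid m_i)
```

where \(\mu(x, y)\) is the background intensity (the nonparametric component here), \(H_t\) is the history of events up to time \(t\), and each past event \((t_i, x_i, y_i, m_i)\) contributes a triggered term: \(\kappa(m_i)\) is the expected number of offspring of a magnitude-\(m_i\) event, \(g\) is the temporal decay (Omori-type), and \(f\) is the spatial triggering kernel (the parametric component).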
The model is then used to predict temporal and spatial seismic incidence. This estimated rate of seismic occurrence will be analyzed temporally with an LSTM neural network architecture, used to learn from important events and variations over time, together with a CNN architecture, which will be used to predict the probability that the maximum intensity occurs at a given location.
Problem statement: The problem arises from the need to develop deep learning tools that reveal the behavior of seismic incidence in different zones, given the interest such studies hold for Chile because of the region's intense seismic activity.
Working hypothesis: It is possible to estimate the rate of seismic incidence in the subduction region between the South American and Nazca plates by using recurrent neural networks and by preprocessing a catalog of seismic events with the ETAS model.
General objective: To estimate, with high probability, the intensity and location of seismic events in the subduction zone of the Nazca plate with the South American plate, using neural networks and results obtained from the ETAS model.
Specific objectives:
1. Obtain the daily rate of seismic occurrence, over the period 2001-01-01 to 2021-01-01, from the spatio-temporal ETAS model.
2. Train a recurrent neural network and a convolutional network to predict the rate of occurrence of seismic events and their location in space, in the subduction zone of the Nazca plate with the South American plate.
Item: Using Machine Learning to Predict Complications in Pregnancy: A Systematic Review (Frontiers, 2022). Authors: Bertini, Ayleen; Salas, Rodrigo; Chabert, Steren; Sobrevia, Luis; Pardo, Fabián.
Introduction: Artificial intelligence is widely used in the medical field, and machine learning has been increasingly used in health care, for prediction and diagnosis, and as a method of determining priority. Machine learning methods have featured in several tools in the fields of obstetrics and childcare. This review aims to summarize machine learning techniques used to predict perinatal complications. Objective: To identify the applicability and performance of machine learning methods used to identify pregnancy complications. Methods: A total of 98 articles were obtained using the keywords "machine learning," "deep learning," and "artificial intelligence," combined with terms related to perinatal complications ("complications in pregnancy," "pregnancy complications"), from three scientific databases: PubMed, Scopus, and Web of Science. These were managed on the Mendeley platform and classified using the PRISMA method. Results: A total of 31 articles were selected after screening according to the inclusion and exclusion criteria. The features used to predict perinatal complications were primarily electronic medical records (48%), medical images (29%), and biological markers (19%), while 4% were based on other types of features, such as sensors and fetal heart rate. The main perinatal complications considered in the application of machine learning thus far are pre-eclampsia and prematurity. Across the 31 studies, a total of sixteen complications were predicted. The main performance metric used is the AUC.
The machine learning applications with the best results were the prediction of prematurity from medical images using the support vector machine technique, with an accuracy of 95.7%, and the prediction of neonatal mortality with the XGBoost technique, with 99.7% accuracy. Conclusion: It is important to continue promoting this area of research and to promote solutions with multicenter clinical applicability through machine learning, in order to reduce perinatal complications. This systematic review contributes significantly to the specialized literature on artificial intelligence and women's health.
Item: Wavelet-based semblance analysis to determine muscle synergy for different handstand postures of Chilean circus athletes (Taylor & Francis, 2021). Authors: Calderón-Díaz, Mailyn; Ulloa-Jiménez, Ricardo; Saavedra, Carolina; Salas, Rodrigo.
The handstand is an uncommon posture, highly demanding in terms of muscle and joint stability, used in sporting and artistic practices across a variety of disciplines. Despite its becoming increasingly widespread, there is no single way to perform a handstand, and the neuromuscular organizational mechanisms involved are unknown. The objective of this study was to determine the muscle synergy of four handstand postures through a semblance analysis based on wavelets of electromyographic signals in the upper limbs of experienced circus performers between 18 and 35 years old. The results show a large difference in positive and negative correlations depending on the posture, which suggests that the more asymmetrical the position of the lower limbs, the greater the number of strategies used to maintain the posture. Although the result is not statistically significant, posture 3 in particular shows the greatest number of positive correlations, which suggests it has the greatest synergy.
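Wavelet semblance, the technique behind this last entry, measures the cosine of the local phase difference between the wavelet transforms of two signals: +1 means locally in phase (positive correlation), -1 means in anti-phase. A compact NumPy sketch of the general method (toy signals, an unnormalized Morlet transform, and not the study's code or EMG data):

```python
import numpy as np

def cwt_morlet(x, scales, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet,
    implemented as direct convolution (illustrative, unnormalized)."""
    coeffs = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-int(4 * s), int(4 * s) + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        coeffs[i] = np.convolve(x, np.conj(psi)[::-1], mode="same")
    return coeffs

def semblance(x1, x2, scales):
    """Wavelet semblance: cosine of the local phase difference
    between two signals; +1 = in phase, -1 = in anti-phase."""
    w1, w2 = cwt_morlet(x1, scales), cwt_morlet(x2, scales)
    return np.cos(np.angle(w1 * np.conj(w2)))

# Toy "EMG envelopes": the second signal is the first one inverted,
# so the semblance should be close to -1 at every scale and time.
t = np.linspace(0, 1, 512)
x1 = np.sin(2 * np.pi * 8 * t)
S = semblance(x1, -x1, scales=np.array([4.0, 8.0, 16.0]))
```

Applied to pairs of muscle EMG signals, scale-time regions of persistently positive semblance are what the study interprets as muscles working in synergy.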